Domain adaptive detection aims to improve the generalization of detectors on the target domain. To reduce the discrepancy in feature distributions between the two domains, recent approaches achieve domain adaptation through feature alignment at different granularities via adversarial learning. However, they neglect the relationship between multiple granularities and different features in alignment, which degrades detection. Addressing this, we introduce a unified multi-granularity alignment (MGA)-based detection framework for domain-invariant feature learning. The key is to encode the dependencies across different granularities, including the pixel, instance, and category levels, simultaneously to align the two domains. Specifically, based on pixel-level features, we first develop an omni-scale gated fusion (OSGF) module to aggregate discriminative representations of instances with scale-aware convolutions, leading to robust multi-scale detection. Besides, we introduce multi-granularity discriminators to identify whether samples at different granularities come from the source or the target domain. Note that MGA not only leverages instance discriminability across categories but also exploits category consistency between the two domains for detection. Furthermore, we present an adaptive exponential moving average (AEMA) strategy that exploits model assessments for model updates to improve pseudo labels and alleviate the local misalignment problem, boosting detection robustness. Extensive experiments on multiple domain adaptation scenarios validate the superiority of MGA over other approaches with both FCOS and Faster R-CNN detectors. Code will be released at https://github.com/tiankongzhang/MGA.
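As a toy illustration of the adaptive-EMA idea, here is a minimal sketch in which a teacher model is updated from a student with a momentum modulated by a model-assessment score. The linear momentum schedule and all names are assumptions for illustration, not the paper's actual AEMA rule:

```python
def aema_update(teacher, student, assessment, m_max=0.999, m_min=0.99):
    """One adaptive-EMA step (hypothetical schedule, not the paper's).

    `assessment` in [0, 1] scores the current student model; a higher
    score lowers the momentum so the student moves the teacher faster.
    """
    m = m_max - (m_max - m_min) * assessment  # adaptive momentum
    return {k: [m * t + (1.0 - m) * s for t, s in zip(teacher[k], student[k])]
            for k in teacher}

# Toy usage with a single 3-element "weight" vector per model.
teacher = {"w": [0.0, 0.0, 0.0]}
student = {"w": [1.0, 1.0, 1.0]}
updated = aema_update(teacher, student, assessment=1.0)  # momentum drops to 0.99
```

With a perfect assessment score the momentum reaches its floor, so the teacher takes the largest admissible step toward the student.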
Time series anomaly detection strives to uncover potential abnormal behaviors and patterns in temporal data, and has fundamental significance in diverse application scenarios. Constructing an effective detection model usually requires adequate training data stored in a centralized manner; however, this requirement sometimes cannot be satisfied in realistic scenarios. As a prevailing approach to this problem, federated learning has demonstrated its power to cooperate with distributed data while protecting the privacy of data providers. However, it is still unclear how existing time series anomaly detection algorithms perform with decentralized data storage and privacy protection through federated learning. To study this, we construct a federated time series anomaly detection benchmark, named FedTADBench, which involves five representative time series anomaly detection algorithms and four popular federated learning methods. We would like to answer the following questions: (1) How do time series anomaly detection algorithms perform when combined with federated learning? (2) Which federated learning method is the most appropriate for time series anomaly detection? (3) How do federated time series anomaly detection approaches perform on different partitions of data across clients? Numerous results, together with the corresponding analysis, are provided from extensive experiments with various settings. The source code of our benchmark is publicly available at https://github.com/fanxingliu2020/FedTADBench.
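For readers unfamiliar with the federated methods such a benchmark compares, here is a minimal sketch of federated averaging (FedAvg), the canonical baseline in which the server aggregates client parameters weighted by local dataset size. (Whether FedAvg is among the four benchmarked methods is not stated in the abstract; the flat-list weight representation is a simplification.)

```python
def fedavg(client_weights, client_sizes):
    """Server-side FedAvg aggregation: a size-weighted average of
    client model parameters, each represented as a flat list."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    agg = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            agg[i] += (n / total) * w[i]
    return agg

# Two clients with different local models; the second holds 3x the data.
global_w = fedavg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[1, 3])
```

The client with more local data pulls the global model proportionally closer to its own parameters.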
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices and bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
The circular coordinates algorithm of de Silva, Morozov, and Vejdemo-Johansson takes as input a dataset together with a cohomology class representing a $1$-dimensional hole in the data; the output is a map from the data into the circle that captures this hole, and that is of minimum energy in a suitable sense. However, when applied to several cohomology classes, the output circle-valued maps can be "geometrically correlated" even if the chosen cohomology classes are linearly independent. It is shown in the original work that less correlated maps can be obtained with suitable integer linear combinations of the cohomology classes, with the linear combinations being chosen by inspection. In this paper, we identify a formal notion of geometric correlation between circle-valued maps which, in the Riemannian manifold case, corresponds to the Dirichlet form, a bilinear form derived from the Dirichlet energy. We describe a systematic procedure for constructing low energy torus-valued maps on data, starting from a set of linearly independent cohomology classes. We showcase our procedure with computational examples. Our main algorithm is based on the Lenstra--Lenstra--Lov\'asz algorithm from computational number theory.
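To make the "integer linear combinations" step concrete, here is a minimal sketch of the elementary size-reduction move underlying LLL, applied to two correlated cocycles with the Dirichlet form modeled as a plain inner product of edge-wise derivatives. This is a simplification for intuition only; the paper's procedure runs the full LLL algorithm on a Gram matrix of Dirichlet forms.

```python
def dirichlet_form(a, b):
    """Discrete stand-in for the Dirichlet form: the inner product of
    edge-wise derivatives of two circle-valued maps."""
    return sum(x * y for x, y in zip(a, b))

def size_reduce(a, b):
    """Replace b by b - round(<a,b>/<a,a>) * a, the elementary integer
    move of LLL, lowering the correlation (and energy) of b while
    staying inside the integer lattice of cohomology classes."""
    c = round(dirichlet_form(a, b) / dirichlet_form(a, a))
    return [y - c * x for x, y in zip(a, b)]

# Two "geometrically correlated" cocycles: b contains a large multiple of a.
a = [1.0, 0.0, 1.0]
b = [3.0, 1.0, 3.0]
b_reduced = size_reduce(a, b)  # the component along a is stripped out
```

Here the energy of `b` drops from 19 to 1 after the reduction, while `a` and `b_reduced` remain linearly independent integer combinations of the originals.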
The Shapley value (SV) is adopted in various machine learning (ML) scenarios, including data valuation, agent valuation, and feature attribution, as it satisfies their fairness requirements. However, as exact SVs are infeasible to compute in practice, SV estimates are approximated instead. This approximation step raises an important question: do the SV estimates preserve the fairness guarantees of exact SVs? We observe that the fairness guarantees of exact SVs are too restrictive for SV estimates. Thus, we generalise Shapley fairness to probably approximate Shapley fairness and propose the fidelity score, a metric that measures the variation of SV estimates and determines how probably the fairness guarantees hold. Our final theoretical contribution is a novel greedy active estimation (GAE) algorithm that maximises the lowest fidelity score and achieves a better fairness guarantee than the de facto Monte-Carlo estimation. We empirically verify that GAE outperforms several existing methods in guaranteeing fairness while remaining competitive in estimation accuracy across various ML scenarios using real-world datasets.
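The de facto Monte-Carlo baseline mentioned above can be sketched as a permutation-sampling estimator: average each player's marginal contribution over random orderings. This is the generic textbook version, not the paper's implementation.

```python
import random

def shapley_mc(players, value, n_perms=2000, seed=0):
    """Monte-Carlo permutation estimate of Shapley values.

    `value` maps a frozenset of players to a real-valued payoff;
    each sampled permutation contributes one marginal gain per player.
    """
    rng = random.Random(seed)
    est = {p: 0.0 for p in players}
    for _ in range(n_perms):
        perm = list(players)
        rng.shuffle(perm)
        coalition, prev = set(), value(frozenset())
        for p in perm:
            coalition.add(p)
            cur = value(frozenset(coalition))
            est[p] += cur - prev  # marginal contribution of p
            prev = cur
    return {p: v / n_perms for p, v in est.items()}

# Additive game: the exact SVs are just the individual weights,
# so the estimator should recover them (up to sampling noise).
weights = {"a": 1.0, "b": 2.0, "c": 3.0}
sv = shapley_mc(list(weights), lambda S: sum(weights[p] for p in S))
```

For this additive game every marginal contribution equals the player's own weight, so the estimate is exact; for general games the variance of such estimates is precisely what the fidelity score quantifies.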
3D reconstruction of novel categories based on few-shot learning is appealing in real-world applications and has attracted growing research interest. Previous approaches mainly focus on how to design shape priors for different categories, and their performance on unseen categories is not very competitive. In this paper, we propose a Memory Prior Contrastive Network (MPCN) that can store shape prior knowledge in a few-shot-learning-based 3D reconstruction framework. With the shape memory, a multi-head attention module is proposed to capture different parts of candidate shape priors and fuse these parts together to guide the 3D reconstruction of novel categories. Moreover, we introduce a 3D-aware contrastive learning method that can not only complement the retrieval accuracy of the memory network but also better organize image features for downstream tasks. Compared with previous few-shot 3D reconstruction methods, MPCN can handle inter-class variability without category annotations. Experimental results on a benchmark synthetic dataset and the Pascal3D+ real-world dataset show that our model significantly outperforms the current state-of-the-art methods.
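The memory-retrieval step of such a network can be illustrated with a bare-bones single-head dot-product attention over stored priors; the paper uses multi-head attention on deep features, and all names and values here are hypothetical.

```python
import math

def attend(query, memory_keys, memory_values):
    """Softmax-weighted retrieval from a memory of stored priors:
    score each key against the query, then blend the values."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in memory_keys]
    peak = max(scores)
    exps = [math.exp(s - peak) for s in scores]  # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(memory_values[0])
    return [sum(w * v[j] for w, v in zip(weights, memory_values))
            for j in range(dim)]

# A query aligned with the first stored key retrieves mostly its value.
out = attend([5.0, 0.0],
             memory_keys=[[1.0, 0.0], [0.0, 1.0]],
             memory_values=[[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
```

The output is a convex combination of the stored values, dominated by the prior whose key best matches the query.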
In the field of representation learning on knowledge graphs (KGs), a hyper-relational fact consists of a main triple and several auxiliary attribute-value descriptions, which is considered more comprehensive and specific than a triple-based fact. However, existing single-view hyper-relational KG embedding methods are limited because they weaken the hierarchical structure that represents the affiliation between entities. To break this limitation, we propose a dual-view hyper-relational KG (DH-KG) structure, which contains a hyper-relational instance view of entities as well as a hyper-relational ontology view of concepts abstracted from the entities, to jointly model hyper-relational and hierarchical information. In this paper, we first define link prediction and entity typing tasks on DH-KG and construct two DH-KG datasets: JW44K-6K, extracted from Wikidata, and HTDM, built on medical data. Furthermore, we propose DHGE, a DH-KG embedding model based on GRAN encoders, HGNNs, and joint learning. Experimental results show that DHGE outperforms baseline models on DH-KG. We also provide an example of applying this technique in the field of hypertension medication. Our model and datasets are publicly available.
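The hyper-relational fact structure defined above, a main triple plus auxiliary attribute-value descriptions, can be sketched as a simple data type; the medication-style example values are hypothetical and DHGE's encoders are not modeled here.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HyperRelationalFact:
    """A hyper-relational fact: a main (head, relation, tail) triple
    extended by auxiliary attribute-value pairs."""
    head: str
    relation: str
    tail: str
    attributes: tuple = ()  # ((attribute, value), ...)

# Hypothetical instance-view fact in the spirit of the medical datasets.
fact = HyperRelationalFact(
    "patient_x", "takes_medication", "drug_y",
    attributes=(("dose", "10mg"), ("frequency", "daily")),
)
```

The auxiliary pairs are what distinguish this representation from a plain triple: dropping `attributes` recovers the ordinary triple-based fact.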
Fusing regression coefficients into homogeneous groups can reveal coefficients that share a common value within each group. Such grouping homogeneity reduces the intrinsic dimension of the parameter space and unleashes sharper statistical accuracy. We propose and investigate a new combinatorial grouping approach called $L_0$-fusion that is amenable to mixed-integer optimization (MIO). On the statistical side, we identify a fundamental quantity called grouping sensitivity that underpins the difficulty of recovering the true groups. We show that $L_0$-fusion achieves grouping consistency under the weakest possible requirement on the grouping sensitivity: if this requirement is violated, the minimax risk of group misspecification fails to converge to zero. Moreover, we show that in the high-dimensional regime, $L_0$-fusion can be coupled with a sure-screening set of features without any essential loss of statistical efficiency, while substantially reducing the computational cost. On the algorithmic side, we provide an MIO formulation for $L_0$-fusion along with a warm-start strategy. Simulations and real data analysis demonstrate the superiority of $L_0$-fusion over its competitors in terms of grouping accuracy.
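A toy one-dimensional analogue of the grouping problem can be solved exactly by dynamic programming: after sorting, optimal groups are contiguous, so the combinatorial search collapses. This sketch is for intuition about what "fusing into homogeneous groups" means; it is not the paper's MIO solver.

```python
def l0_fuse(beta, k):
    """Fuse 1-D coefficients into k groups minimizing within-group
    squared deviation; each coefficient is replaced by its group mean."""
    xs = sorted(beta)
    n = len(xs)

    def cost(i, j):  # squared deviation of xs[i:j] around its mean
        seg = xs[i:j]
        m = sum(seg) / len(seg)
        return sum((x - m) ** 2 for x in seg)

    INF = float("inf")
    dp = [[INF] * (k + 1) for _ in range(n + 1)]
    cut = [[0] * (k + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for j in range(1, n + 1):
        for g in range(1, k + 1):
            for i in range(g - 1, j):
                c = dp[i][g - 1] + cost(i, j)
                if c < dp[j][g]:
                    dp[j][g], cut[j][g] = c, i

    # Recover the groups and map each coefficient to its fused value.
    fused, j = {}, n
    for g in range(k, 0, -1):
        i = cut[j][g]
        m = sum(xs[i:j]) / (j - i)
        for x in xs[i:j]:
            fused[x] = m
        j = i
    return [fused[b] for b in beta]

# Five coefficients that plausibly form two homogeneous groups.
fused = l0_fuse([0.1, 2.0, -0.1, 1.9, 0.0], k=2)
```

The coefficients near zero fuse to one common value and those near two fuse to another, which is exactly the dimension reduction the abstract describes.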
Low-light image enhancement (LLE) remains challenging due to the unfavorable low contrast and weak visibility of single RGB images. In this paper, we respond to an intriguing learning-related question: can leveraging accessible unpaired over/underexposed images and high-level semantic guidance improve the performance of cutting-edge LLE models? Here, we propose an effective semantically contrastive learning paradigm (namely SCL-LLE). Beyond existing LLE wisdom, it casts the image enhancement task as multi-task joint learning, where LLE is converted into three constraints of contrastive learning, semantic brightness consistency, and feature preservation, simultaneously ensuring exposure, texture, and color consistency. SCL-LLE allows the LLE model to learn from unpaired positives (normal-light) and negatives (over/underexposed), and enables it to interact with scene semantics to regularize the image enhancement network; this interaction between high-level semantic knowledge and the low-level signal prior has seldom been investigated in previous methods. Training on readily available open data, extensive experiments demonstrate that our method surpasses state-of-the-art LLE models on six independent cross-scene datasets. Moreover, the potential of SCL-LLE to benefit downstream semantic segmentation under extremely dark conditions is discussed. Source code: https://github.com/linglix/sclle.
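The contrastive constraint can be illustrated with an InfoNCE-style loss on plain feature vectors, where the enhanced image (the anchor) is pulled toward a normal-light positive and pushed away from over/underexposed negatives. The feature vectors and temperature here are toy assumptions; the paper operates on deep network features.

```python
import math

def contrastive_loss(anchor, positive, negatives, tau=0.5):
    """InfoNCE-style loss over cosine similarities: low when the
    anchor is close to the positive and far from the negatives."""
    def cos(u, v):
        num = sum(a * b for a, b in zip(u, v))
        den = (math.sqrt(sum(a * a for a in u))
               * math.sqrt(sum(b * b for b in v)))
        return num / den

    logits = ([cos(anchor, positive) / tau]
              + [cos(anchor, n) / tau for n in negatives])
    z = sum(math.exp(l) for l in logits)
    return -math.log(math.exp(logits[0]) / z)

# Anchor near the normal-light positive yields a small loss;
# anchor near an overexposed negative yields a large one.
loss_good = contrastive_loss([1.0, 0.0], [0.9, 0.1], [[-1.0, 0.0]])
loss_bad = contrastive_loss([1.0, 0.0], [-0.9, 0.1], [[1.0, 0.0]])
```

Minimizing this term drives the enhancement network to produce outputs whose features resemble normal-light images rather than badly exposed ones.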
We propose a reinforcement learning (RL) approach to compute the expression of the quasi-stationary distribution. Based on a fixed-point formulation of the quasi-stationary distribution, we minimize the KL divergence between the two Markovian path distributions induced by the candidate distribution and the true target distribution. To solve this challenging minimization problem by gradient descent, we apply reinforcement learning techniques by introducing the corresponding reward and value functions. We derive the corresponding policy gradient theorem and design an actor-critic algorithm to learn the optimal solution and the value function. Numerical examples on finite-state Markov chains are tested to demonstrate the new method.
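The fixed-point formulation that the RL method starts from can be sketched for a finite-state chain: the quasi-stationary distribution is the normalized fixed point of $\nu \mapsto \nu Q / \|\nu Q\|_1$, where $Q$ is the substochastic matrix of transitions among non-absorbed states. This plain iteration is only the linear-algebra baseline; the paper replaces it with an actor-critic scheme.

```python
def qsd_fixed_point(Q, iters=500):
    """Iterate nu <- nu Q / |nu Q|_1 for a substochastic matrix Q
    (row sums below 1 reflect leakage into the absorbing state),
    converging to the quasi-stationary distribution."""
    n = len(Q)
    nu = [1.0 / n] * n
    for _ in range(iters):
        new = [sum(nu[i] * Q[i][j] for i in range(n)) for j in range(n)]
        z = sum(new)  # probability mass surviving one step
        nu = [x / z for x in new]
    return nu

# A 2-state chain where each state leaks some mass to absorption.
Q = [[0.5, 0.3],
     [0.2, 0.6]]
nu = qsd_fixed_point(Q)
```

For this $Q$ the leading left eigenvector is proportional to $(0.4, 0.6)$, which the iteration recovers.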